Dispute over threat of extinction posed by AI looms over surging industry
While experts disagree about whether AI poses an existential threat, Dan Hendrycks, a researcher and the director of the Center for AI Safety, or CAIS, is among those who believe the technology could destroy humanity in a variety of ways.
A bad actor could gain possession of a future version of generative AI, ask it for instructions on how to make a biological weapon and set off devastation, he told ABC News. Or, he added, the efficiency delivered by AI could force widespread business adoption, leaving the global economy in its thrall. The technology could also worsen the spread of misinformation and disinformation, he said.
The end of humanity, Hendrycks said, is hardly a remote possibility. "If I see international coordination doesn't happen, or much of it, it'll be more likely than not that we go extinct," he added.
Experts who lend credence to the threat told ABC News that the massive potential risks require urgent attention and stiff oversight, while skeptics warned that grave forecasts fuel a misunderstanding about the capabilities of AI and distract from current harms caused by the technology.
In recent months, dire warnings about the massive threat posed by AI have ascended from the corridors of computer science departments to the halls of Congress.
An open letter written in May by CAIS warned that AI poses a "risk of extinction" akin to pandemics or nuclear war, featuring signatures from hundreds of researchers and industry leaders like OpenAI CEO Sam Altman and Demis Hassabis, the CEO of Google DeepMind, the tech giant's AI division.
Altman, whose company developed the viral AI sensation ChatGPT, in May told a Senate subcommittee: "If this technology goes wrong, it can go quite wrong." In an interview with ABC News in March, Altman said, "I think people should be happy that we are a little bit scared of this."
OpenAI and Google did not immediately respond to ABC News' request for comment.
Other AI luminaries, however, have balked. Yann LeCun, chief AI scientist at Meta, told the MIT Technology Review that fear of an AI takeover is "preposterously ridiculous." Sarah Myers West, managing director of the nonprofit AI Now Institute, told ABC News: "A lot of this is more rhetoric than grounded analysis."
The divide over the existential threat posed by AI looms over recent advances in the technology as it sweeps across institutions from manufacturing to mass entertainment, prompting disagreement about the pace of development and the focus of possible regulation.
"We're looking to experts to tell us," said Jeffrey Sonnenfeld, a professor of management at Yale University who convenes gatherings of top CEOs. "But experts are split on this."
As AI develops, however, an imperative for onlookers is clear, he said: "We can't sit on the sidelines."
Concern about the risks posed by AI has drawn greater attention lately in response to major breakthroughs like ChatGPT, which reached 100 million users within two months of its launch in November.
Microsoft launched a version of its Bing search engine in March that offers responses delivered by GPT-4, OpenAI's latest model. Rival search company Google in February announced an AI model called Bard.
"AI in previous instantiations was a largely invisible system," Myers West said. "It wasn't something we interacted with in a tangible way. This has had a really visceral effect on the broader public in that it's contributing to this wave of both excitement and a tremendous amount of anxiety."
Doomsday forecasts, however, lack granular specifics and overstate the potential for self-awareness to form within generative AI like ChatGPT, which scans text from across the internet and strings words together based on statistical probability, Myers West said.
"At present, essentially the way these systems work is akin to applied statistics, so they don't have any capacity for deeper understanding, don't have any capacity for empathy and certainly not sentience," Myers West added.
But the risk posed by AI stems from its potential to exceed human intelligence rather than mimic it, Stuart Russell, an AI researcher at the University of California, Berkeley who co-authored a study on societal-scale dangers of the technology, told ABC News.
"If you make systems that are more intelligent than humans, they will have more power over the world than we do, just as we have more power over the world than other species on earth," he said.
Acknowledging a lack of specifics in some prominent messages about the extreme risks, such as the open letter released by CAIS in May, Russell said: "Once you get into specifics, you end up with arguments about which is the most plausible." Hendrycks, of CAIS, added: "Since AI touches on many aspects of society, we end up finding there are many, many risk sources."
Key remedies, such as robust government oversight and liability for AI developers, can help deter a range of catastrophic scenarios, Hendrycks said. "We don't need to know exactly what's going to happen to make interventions to reduce risk," he said.
To be sure, while acknowledging the risks of AI, experts heralded its potential benefits. Proponents of AI say the technology could increase productivity, automate unpleasant or mundane tasks and afford the opportunity to focus on creative and innovative endeavors. AI has been touted as an aid for endeavors ranging from the fight against climate change to the diagnosis of cancer.
Senate Majority Leader Chuck Schumer, D-N.Y., released a framework last month outlining four pillars that he hopes will guide future bipartisan legislation governing AI: security, accountability, protecting our foundations and explainability.
The framework is not legislative text, and it's not clear how long it will take for Congress to begin putting together legislative proposals. There has not yet been any comprehensive legislation introduced in Congress to deal with regulating AI, though a bicameral group of lawmakers introduced a proposal last month that would create a blue-ribbon commission to study AI's impact.
"We have no choice but to acknowledge that AI's changes are coming, and in many cases are already here. We ignore them at our own peril," Schumer said last month in prepared remarks at the Center for Strategic and International Studies.
President Joe Biden, meanwhile, appeared last month at a roundtable event focused on AI in California, describing artificial intelligence as a technology of "enormous promise" that also carries risks.
An effort to ward off hypothetical long-term dangers could distract from present-day damage caused by AI, said Isabelle Jones, campaign outreach manager for Stop Killer Robots, which aims to establish an international agreement prohibiting the use of autonomous weapons.
"I think that to purely focus on the future is to the detriment of the existing harms that are coming about or that there's an immediate risk of," Jones told ABC News.
Policymakers can address current and future dangers at the same time, Russell said, just as they do in combating climate change. "I think the narrative that you can either do one or the other but can't do both is actually poisonous."
Regardless of whether they believe or question forecasts of extreme risk, experts who spoke with ABC News called on the government to regulate the technology.
"My biggest thing is regulation and international coordination," Hendrycks said. Jones cited international accords on issues like nuclear proliferation as a model for reining in autonomous weapons.
Still, Myers West said, differences could again arise on the issue of specifics. "The devil is in the details," she said.
ABC News' Alison Pecorin contributed reporting.